INTRODUCTION
For the last few years, the Naval Research Laboratory has been attempting to build robots that are similar to humans in a variety of ways. The goal has been to build systems that think and act like a person rather than look like a person, because the state of the art is not sufficient for a robot to look, even superficially, human. There are at least two reasons to build robots that think and act like a human. First, how an artificial system acts has a profound effect on how people act towards the system. Second, how an artificial system thinks has a profound effect on how people interact with the system.
HOW PEOPLE ACT TOWARDS ARTIFICIAL SYSTEMS
“Everyone” knows that computers have no feelings, attitudes, or desires. Most people do not worry about hurting a toaster's feelings or cursing at a VCR. However, in a surprising series of studies, Cliff Nass has shown that in some situations people do, in fact, treat computer systems as social entities. Nass has shown that it takes very few social cues for a person to treat computers (including robots, AI programs, etc.) as social creatures. For example, Nass and Moon (2000) examined people's application of social categories to computers. They compared users' interactions with two computer systems – a tutor and an evaluator – using different combinations of male and female voices. Even though the participants indicated that they knew they were interacting with a computer, and explicitly reported that the voice did not relate to the “gender” of the computer, or even of the computer programmer, there were distinct gender-related biases in the experimental data.